    Affordance-Based Grasping Point Detection Using Graph Convolutional Networks for Industrial Bin-Picking Applications

    Grasping point detection has traditionally been a core robotics and computer vision problem. In recent years, deep learning based methods have been widely used to predict grasping points and have shown strong generalization capabilities under uncertainty. In particular, approaches that predict object affordances without relying on object identity have obtained promising results in random bin-picking applications. However, most of them rely on RGB/RGB-D images, and it is not clear to what extent 3D spatial information is used. Graph Convolutional Networks (GCNs) have been successfully used for object classification and scene segmentation in point clouds, and also to predict grasping points in simple laboratory experiments. In the present work, we adapted the Deep Graph Convolutional Network model with the intuition that learning from n-dimensional point clouds would boost performance when predicting object affordances. To the best of our knowledge, this is the first time that GCNs have been applied to predict affordances for suction and gripper end effectors in an industrial bin-picking environment. Additionally, we designed a bin-picking-oriented data preprocessing pipeline that eases the learning process and yields a flexible solution for any bin-picking application. To train our models, we created a highly accurate RGB-D/3D dataset which is openly available on demand. Finally, we benchmarked our method against a 2D Fully Convolutional Network based method, improving the top-1 precision score by 1.8% and 1.7% for suction and gripper respectively. This project received funding from the European Union's Horizon 2020 Research and Innovation Programme under grant agreement No. 780488.
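    Below is a minimal sketch of the kind of DGCNN-style EdgeConv network such an approach builds on, assuming PyTorch; the layer widths, neighbourhood size k, and the two-class per-point head (suction and gripper affordance scores) are illustrative choices, not the paper's exact architecture.

```python
# Minimal DGCNN-style EdgeConv network for per-point affordance scoring.
# Assumptions: PyTorch; layer widths, k, and the 2-class head are illustrative.
import torch
import torch.nn as nn


def knn(x, k):
    # x: (B, C, N) point features; returns the indices of the k nearest
    # neighbours of every point under squared Euclidean distance.
    inner = -2 * torch.matmul(x.transpose(2, 1), x)        # (B, N, N)
    xx = torch.sum(x ** 2, dim=1, keepdim=True)            # (B, 1, N)
    neg_dist = -xx - inner - xx.transpose(2, 1)            # negative sq. distances
    return neg_dist.topk(k=k, dim=-1)[1]                   # (B, N, k)


def edge_features(x, k):
    # Builds the EdgeConv input [x_i, x_j - x_i] for each point i and each
    # of its k neighbours j.
    B, C, N = x.shape
    idx = knn(x, k) + torch.arange(B, device=x.device).view(-1, 1, 1) * N
    flat = x.transpose(2, 1).reshape(B * N, C)
    neighbours = flat[idx.view(-1)].view(B, N, k, C)
    central = flat.view(B, N, 1, C).expand(-1, -1, k, -1)
    out = torch.cat([central, neighbours - central], dim=3)  # (B, N, k, 2C)
    return out.permute(0, 3, 1, 2)                           # (B, 2C, N, k)


class AffordanceGCN(nn.Module):
    def __init__(self, k=20, num_classes=2):
        super().__init__()
        self.k = k
        self.conv1 = nn.Sequential(nn.Conv2d(6, 64, 1), nn.BatchNorm2d(64), nn.ReLU())
        self.conv2 = nn.Sequential(nn.Conv2d(128, 128, 1), nn.BatchNorm2d(128), nn.ReLU())
        self.head = nn.Conv1d(128, num_classes, 1)           # per-point scores

    def forward(self, xyz):                                  # xyz: (B, 3, N)
        x = self.conv1(edge_features(xyz, self.k)).max(dim=-1)[0]
        x = self.conv2(edge_features(x, self.k)).max(dim=-1)[0]
        return self.head(x)                                  # (B, num_classes, N)
```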

    Camera Pose Optimization for 3D Mapping

    Digital 3D models of environments are of great value in many applications, but the algorithms that build them autonomously are computationally expensive and require considerable time. In this work, we present an active simultaneous localisation and mapping system that optimises the pose of the sensor for the 3D reconstruction of an environment, while a 2D Rapidly-exploring Random Tree (RRT) algorithm controls the motion of the mobile platform as the ground exploration strategy. Our objective is to obtain a 3D map comparable to that of a complete 3D approach in a time interval of the same order of magnitude as a 2D exploration algorithm. The optimisation is performed with a ray-tracing technique over a set of candidate poses, based on an uncertainty octree built during exploration whose values are computed according to where each voxel has been viewed from. The system is tested in diverse simulated environments and compared with two exploration methods from the literature, one based on 2D and another that considers the complete 3D space. Experiments show that, by combining our algorithm with a 2D exploration method, the 3D map obtained is comparable in quality to that of a pure 3D exploration procedure while demanding less time. This work was supported in part by the project "5R-Red Cervera de Tecnologías Robóticas en Fabricación Inteligente" through the "Centros Tecnológicos de Excelencia Cervera" programme funded by the Centre for the Development of Industrial Technology (CDTI), under contract CER-20211007.
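    A toy version of the viewpoint-scoring idea could look as follows, assuming a dense NumPy voxel grid in place of the paper's uncertainty octree; all names (uncertainty, occupied, the frustum parameters) are hypothetical.

```python
# Scores candidate sensor poses by the map uncertainty their rays would see.
# Assumptions: dense NumPy voxel grids stand in for the paper's octree, and
# the frustum parameters and names (uncertainty, occupied) are hypothetical.
import numpy as np


def ray_directions(yaw, pitch, h_fov=1.0, v_fov=0.8, n_h=24, n_v=18):
    # Unit direction vectors covering the camera frustum of a pose oriented
    # by yaw/pitch (radians).
    dirs = []
    for a in np.linspace(-h_fov / 2, h_fov / 2, n_h):
        for b in np.linspace(-v_fov / 2, v_fov / 2, n_v):
            cp = np.cos(pitch + b)
            dirs.append((cp * np.cos(yaw + a), cp * np.sin(yaw + a),
                         np.sin(pitch + b)))
    return np.array(dirs)


def score_pose(origin, yaw, pitch, uncertainty, occupied,
               voxel_size=0.05, max_range=4.0):
    # Walk each ray in fixed steps, summing the uncertainty of the voxels it
    # crosses, and stop at the first occupied voxel (rays cannot see through
    # surfaces). A higher score means a more informative viewpoint.
    score = 0.0
    for d in ray_directions(yaw, pitch):
        for s in range(1, int(max_range / voxel_size)):
            i, j, k = ((origin + s * voxel_size * d) / voxel_size).astype(int)
            if not (0 <= i < uncertainty.shape[0] and
                    0 <= j < uncertainty.shape[1] and
                    0 <= k < uncertainty.shape[2]):
                break
            score += uncertainty[i, j, k]
            if occupied[i, j, k]:
                break
    return score
```

    The best next view is then simply the argmax of score_pose over the candidate set produced by the exploration strategy.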

    Active Mapping and Robot Exploration: A Survey

    Simultaneous localization and mapping addresses the problem of building a map of the environment, without any prior information, from the data obtained by one or more sensors. In most systems the robot is driven by a human operator, but some are capable of navigating autonomously while mapping, which is called active simultaneous localization and mapping. This strategy focuses on actively computing trajectories to explore the environment while building a map with minimum error. In this paper, a comprehensive review of the research developed in this field is provided, targeting the most relevant contributions in indoor mobile robotics. This research was funded by the ELKARTEK project ELKARBOT KK-2020/00092 of the Basque Government.
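    As a concrete illustration of the family of methods surveyed, the classic frontier-based strategy drives the robot toward the boundary between free and unknown space; the occupancy-grid encoding below (-1 unknown, 0 free, 1 occupied) is an assumption, not taken from the paper.

```python
# Frontier detection on a 2D occupancy grid (assumed encoding: -1 unknown,
# 0 free, 1 occupied): a frontier is a free cell adjacent to unknown space.
import numpy as np


def find_frontiers(grid):
    frontiers = []
    rows, cols = grid.shape
    for r in range(rows):
        for c in range(cols):
            if grid[r, c] != 0:        # only free cells can be frontiers
                continue
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols and grid[nr, nc] == -1:
                    frontiers.append((r, c))
                    break
    return frontiers
```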

    Dynamic mosaic planning for a robotic bin-packing system based on picked part and target box monitoring

    This paper describes the dynamic mosaic planning method developed in the context of the PICKPLACE European project. The dynamic planner has enabled a robotic system capable of packing a wide variety of objects without having to be adjusted for each reference. The mosaic planning system consists of three modules. First, the picked-item monitoring module observes the grasped item to determine how the robot has picked it. At the same time, the destination container is monitored online to obtain the actual status of the packaging. To this end, we present a novel heuristic algorithm that, based on the point cloud of the scene, estimates the empty volume inside the container as empty maximal spaces (EMS). Finally, we present the dynamic IK-PAL mosaic planner, which dynamically estimates the optimal packing pose considering both the status of the picked part and the estimated EMSs. The developed method has been successfully integrated in a real robotic picking and packing system and validated with seven tests of increasing complexity. In these tests we demonstrate the flexibility of the presented system in handling a wide range of objects in a real, dynamic packaging environment. To our knowledge, this is the first time that a complete online picking and packing system has been deployed in a real robotic scenario, creating mosaics with arbitrary objects while accounting for the dynamics of a real robotic packing system. This article has been funded by the European Union's Horizon 2020 Research and Innovation Programme under grant agreement No. 780488, and by the project "5R-Red Cervera de Tecnologías Robóticas en Fabricación Inteligente", contract number CER-20211007, under the "Centros Tecnológicos de Excelencia Cervera" programme funded by the Centre for the Development of Industrial Technology (CDTI).
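    For context, the sketch below shows the classic geometric empty-maximal-spaces bookkeeping that the point-cloud-based estimator replaces: when a box is placed, every overlapping EMS is split into up to six residual spaces and non-maximal ones are pruned. The axis-aligned min/max-corner representation is an assumption; the paper instead estimates EMSs from the observed point cloud of the real container.

```python
# Classic empty-maximal-spaces (EMS) update, as a toy stand-in for the
# paper's point-cloud-based estimator. A space/box is a pair of corners
# ((x1, y1, z1), (x2, y2, z2)) with min <= max on every axis.

def intersects(a, b):
    return all(a[0][i] < b[1][i] and b[0][i] < a[1][i] for i in range(3))


def contains(a, b):
    # True if space a fully contains space b (b is then not maximal).
    return all(a[0][i] <= b[0][i] and b[1][i] <= a[1][i] for i in range(3))


def split_ems(ems, box):
    # Splitting one EMS against a placed box yields up to six residual
    # spaces (two per axis); degenerate ones are discarded.
    out = []
    for i in range(3):
        for lo, hi in ((ems[0][i], box[0][i]), (box[1][i], ems[1][i])):
            if hi > lo:
                p1, p2 = list(ems[0]), list(ems[1])
                p1[i], p2[i] = lo, hi
                out.append((tuple(p1), tuple(p2)))
    return out


def update_ems(ems_list, box):
    # Replace every space the new box overlaps by its residuals, drop value
    # duplicates, then keep only the maximal spaces.
    new = []
    for ems in ems_list:
        new.extend(split_ems(ems, box) if intersects(ems, box) else [ems])
    new = list(set(new))
    return [e for e in new if not any(e is not f and contains(f, e) for f in new)]


# Usage: the whole container starts as a single EMS.
# ems = [((0, 0, 0), (60, 40, 30))]
# ems = update_ems(ems, ((0, 0, 0), (20, 20, 10)))   # after placing one item
```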

    A Generic ROS-Based Control Architecture for Pest Inspection and Treatment in Greenhouses Using a Mobile Manipulator

    To meet the demands of a rising population, greenhouses must face the challenge of producing more in a more efficient and sustainable way. Innovative mobile robotic solutions with flexible navigation and manipulation strategies can help monitor the field in real time. Guided by Integrated Pest Management strategies, robots can perform early pest detection and selective treatment tasks autonomously. However, combining the different robotic skills is error-prone work that requires experience in many robotic fields, usually resulting in ad-hoc solutions that are not reusable in other contexts. This work presents Robotframework, a generic ROS-based architecture that can easily integrate different navigation, manipulation, perception, and high-level decision modules, leading to faster and simpler development of new robotic applications. The architecture includes generic real-time data collection tools, diagnosis and error handling modules, and user-friendly interfaces. To demonstrate the benefits of combining and easily integrating different robotic skills using the architecture, two flexible manipulation strategies have been developed to enhance early pest detection and to perform targeted spraying in both simulated and commercial field greenhouses. In addition, a further use case demonstrates the applicability of the architecture in other industrial contexts. This work was supported in part by the GreenPatrol European project through the European GNSS Agency under the European Union's Horizon 2020 Research and Innovation Programme under Grant 776324.
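    As an illustration of the skill-module pattern such an architecture encourages, here is a hypothetical minimal rospy node; Robotframework's actual interfaces are not reproduced here, so the node name, topic names, and message type are assumptions.

```python
# Hypothetical minimal skill node in the spirit of a generic ROS-based
# architecture; the node name, topics, and message type are assumptions.
import rospy
from std_msgs.msg import String


class InspectionSkill:
    def __init__(self):
        rospy.init_node("pest_inspection_skill")
        # Report results to whatever high-level decision module listens.
        self.result_pub = rospy.Publisher("/skill/inspection/result",
                                          String, queue_size=10)
        # React to commands from the task-level coordinator.
        rospy.Subscriber("/skill/inspection/command", String, self.on_command)

    def on_command(self, msg):
        rospy.loginfo("inspection command received: %s", msg.data)
        # ... trigger perception / manipulation skills here ...
        self.result_pub.publish(String(data="inspection done"))


if __name__ == "__main__":
    InspectionSkill()
    rospy.spin()
```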

    A Benchmarking of Learning Strategies for Pest Detection and Identification on Tomato Plants for Autonomous Scouting Robots Using Internal Databases

    Greenhouse crop production is growing throughout the world, and early pest detection is of particular importance for productivity and for reducing the use of pesticides. Conventional visual inspection methods are inefficient for large crops. Computer vision and recent advances in deep learning can play an important role in increasing reliability and productivity. This paper presents the development and comparison of two different approaches for vision-based automated pest detection and identification using learning strategies: a solution that combines computer vision and machine learning is compared against a deep learning solution. The main focus of our work is the selection of the best approach based on pest detection and identification accuracy. The inspection targets the most harmful pests on greenhouse tomato and pepper crops, Bemisia tabaci and Trialeurodes vaporariorum. A dataset with a large number of images of infested tomato plants was created to generate and evaluate machine learning and deep learning models. The results showed that the deep learning technique provides a better solution because (a) it achieves detection and classification in one step, (b) it obtains better accuracy, (c) it distinguishes better between Bemisia tabaci and Trialeurodes vaporariorum, and (d) it allows balancing speed against accuracy by choosing among different models.
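    As a sketch of the deep learning branch of such a benchmark, a transfer-learning classifier could be set up as below, assuming PyTorch and torchvision; the backbone choice and the three classes (the two whitefly species plus a no-pest class) are assumptions.

```python
# Illustrative transfer-learning classifier for the deep learning branch of
# such a benchmark. The ResNet-18 backbone and the three assumed classes
# (Bemisia tabaci, Trialeurodes vaporariorum, no pest) are not from the paper.
import torch
import torch.nn as nn
from torchvision import models

num_classes = 3
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, num_classes)

# Freeze the backbone and train only the new head; swapping backbones is one
# way to trade speed against accuracy, as point (d) above suggests.
for name, p in model.named_parameters():
    p.requires_grad = name.startswith("fc.")

optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-3)
criterion = nn.CrossEntropyLoss()
```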